Introduction

Overview and motivation

Video surveillance (CCTV) is a technology that is nowadays deeply woven into the everyday life of many people, as one has come to expect it in many varied circumstances (Ossola, 2019). The rationale behind the installation of these systems seems very clear for governments. For example, on Buffalo’s (NY) open data website, one can read that “the City of Buffalo deploys a real-time, citywide video surveillance system to augment the public safety efforts of the Buffalo Police Department”. Yet, the development of this technology is not exempt from controversy. For instance, many observers claim that the expansion of video surveillance poses an unregulated threat to privacy (ACLU, 2021). Still, many people seem willing to accept this loss of privacy, as the surge in video surveillance makes them feel safer (Madden & Rainie, 2015).

Throughout this research, we challenge the widespread belief that people who have “nothing to hide” should be content with the expansion of CCTV networks because the latter make them safer (Madden & Rainie, 2015). Indeed, on top of the many privacy issues linked with this surge in video surveillance systems, one might legitimately ask whether these cameras actually make people safer.

The goal of the first phase of this project is to investigate the crime-deterrent potential of CCTVs in an American city. This potential will also be compared across the different types of crime committed in the area. In a second phase, the dispersion of CCTVs within the city will be investigated. Indeed, according to some research, mass surveillance has a stronger impact on communities already disadvantaged by their poverty, race, religion, ethnicity, or immigration status (Gellman & Adler-Bell, 2017). We would like to see whether our data enables us to validate or invalidate this theory. It would also be extremely interesting, even though challenging, to see whether the installation of surveillance systems could potentially create even more pernicious issues such as crime displacement (Waples, Gill & Fisher, 2009).

In sum, we argue that, in a world where CCTVs and other surveillance systems are flourishing, it might be beneficial to take a step back and question both the efficacy and the implementation design of such technologies, since they are often portrayed by various stakeholders as miraculous solutions to very complex issues.

Backgrounds

Augustin: Augustin obtained a degree in Business Administration at the University of St-Gallen where he had the opportunity to develop a strong interest in digital business ethics. He wrote his bachelor’s thesis on the privacy implications of the use of fear appeals in home surveillance devices’ marketing strategy.

Marine: Marine obtained a bachelor’s degree in Law at the UBO (Université de Bretagne-Occidentale). She is presently enrolled in the Master DCS (Droit, Criminalité et Sécurité des technologies de l’information) at the University of Lausanne. Last year, she had the opportunity to take a data protection course and learn more about cyber security and crime in general.

Daniel: Daniel is an exchange student from Koblenz, Germany. Daniel obtained a bachelor’s degree in Business Administration/Management at the WHU - Otto Beisheim School of Management, Germany. He is currently pursuing a Master of Management, focusing on family businesses, entrepreneurship and data science in his courses. Of particular relevance to this project, Daniel spent several months in the United States after high school and can thus relate to the topic of police violence and crime in the US.

Motivations

Firstly, from our respective backgrounds, we derive a strong interest in new technologies and privacy. We believe that every person is entitled to the fundamental right to privacy. Unfortunately, one observes an increasing tendency of governments and other stakeholders (e.g. businesses such as GAFA (Google, Amazon, Facebook, Apple)) to take more and more control over our daily lives through digital technologies such as cameras, computers or smartphones. For these reasons, it is interesting to ask ourselves whether this massive collection of our data leads to more security or to more restrictions on our freedom.

Secondly, if we look at European law such as the GDPR, collection and processing of our data must be proportionate to the purpose of that processing. Therefore, it is of interest to us to determine whether the same principles apply in the United States and to see whether the installation of cameras, with the objective of security, really reduces crime and makes a city more secure.

Thirdly, it must also be said that crime and the legislative discussions regarding the right to bear arms in the United States are fascinating. At first sight, it seems as if the freedom to carry a gun makes the US more prone to crimes such as mass shootings. To verify or falsify our hypotheses, we also want to see, through the datasets we obtained, what kind of crime prevails in American cities and how it evolves according to the districts and their particularities.

Research questions

  1. Does the presence of CCTVs in a given area actually deter crime?
  2. What types of crimes may be deterred by surveillance cameras?
  3. Is the impact of CCTV installation on crime reduction higher/lower/same in higher income neighborhoods compared to lower income neighborhoods?
  4. Are there more public cameras in lower income/higher unemployment areas compared to higher income/employment areas? (Does the government respect privacy issues depending on your income level?)
  5. Do we observe crime displacement issues caused by the installation of CCTV in some neighbourhoods?
  6. Is there a relationship between internet accessibility of a neighbourhood and crimes/CCTV installations?

Data

Data source

We have six raw data sets, all retrieved from the Baltimore government’s open data portal. We found data about crimes committed in Baltimore, CCTV locations in the city, poverty rates, the population, and households with internet access. We also found a data set showing the reference boundaries of the Community Statistical Area geographies. The latter will certainly be helpful to match each data set’s observations together.

Raw Data sets

2.1 Crime Data set

This dataset represents the location and characteristics of major crimes against persons, such as homicide, shooting, robbery, aggravated assault, etc., within the city of Baltimore. This data set contains 350’294 observations.

  • RowID = ID of the row, 350’294 in total

  • CrimeDateTime = date and time of the crime. Format yyyy/mm/dd hh:mm:sstzd

  • CrimeCode = Code corresponding to the type of crime committed

  • Location = Textual information on where the crime was committed

  • Description = Textual description of the crime committed corresponding to a CrimeCode.

  • Inside/Outside = Provides information on whether crime was committed inside or outside

  • Weapon = Provides details on what weapon has been used, if any

  • Post = Number corresponding to the police post concerned. A map with the corresponding police posts can be found here: http://moit.baltimorecity.gov/sites/default/files/police_districts_w_posts.pdf

  • District = Name of the district, regrouping different neighbourhoods. Baltimore is officially divided into nine geographical regions: North, Northeast, East, Southeast, South, Southwest, West, Northwest, and Central.

  • Neighborhood = Name of the neighborhood in which the crime was committed. Most names match the neighborhood names contained in the dataset about Community Statistical Areas.

  • Latitude = Latitude, Coordinate system: EPSG:4326 WGS 84

  • Longitude = Longitude, Coordinate system: EPSG:4326 WGS 84

  • GeoLocation = Combination of latitude and longitude, Coordinate system: EPSG:4326 WGS 84

  • Premise = Information on the premise where the crime was committed. One counts more than 120’000 observations in the streets.

crime_data <- read.csv(file = here::here("data/Baltimore_Part1_Crime_data.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/part1-crime-data/explore]
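As a quick sketch of the CrimeDateTime format described above (the sample value below is hypothetical, not taken from the data set), such strings can be parsed into proper date-times in base R before being reduced to dates:

```r
# Sketch: parse a CrimeDateTime string (format yyyy/mm/dd hh:mm:sstzd).
# The sample value is hypothetical; trailing timezone text is ignored
# by the format string, and we interpret the stamp as UTC.
sample_dt <- "2021/10/15 22:30:00+00"
parsed <- as.POSIXct(sample_dt, format = "%Y/%m/%d %H:%M:%S", tz = "UTC")
as.Date(parsed)  # just the date component
```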

2.2 CCTV Data set

This dataset represents closed circuit camera locations capturing activity within 256ft (~2 blocks). It contains 837 observations in total.

  • X = Longitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator

  • Y = Latitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator

  • OBJECTID = ID of the camera, 837 in total

  • CAM_NUM = Unique number attributed to the camera. This might suggest that the data set does not show the location of every camera in Baltimore.

  • LOCATION = Textual information on where the camera is located

  • PROJ = Name of the area in which the camera is located. It does not always match the name of the “standard” community statistical areas.

  • XCOORD = Longitude, Coordinate system: EPSG:4326 WGS 84

  • YCOORD = Latitude, Coordinate system: EPSG:4326 WGS 84

cctv_data <- read.csv(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/cctv-locations-crime-cameras/explore]
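Since this file carries the same locations in two coordinate systems (X/Y in EPSG:3857 and XCOORD/YCOORD in EPSG:4326), one can sanity-check them against each other. A minimal base-R sketch, using the spherical Pseudo-Mercator formulas rather than a projection library, and a hypothetical input point:

```r
# Sketch: convert an EPSG:3857 Pseudo-Mercator point (X, Y) back to
# EPSG:4326 longitude/latitude, using the spherical Mercator formulas
# with radius R = 6378137 m. The input point below is hypothetical.
merc_to_lonlat <- function(x, y, R = 6378137) {
  lon <- x / R * 180 / pi
  lat <- (2 * atan(exp(y / R)) - pi / 2) * 180 / pi
  c(lon = lon, lat = lat)
}
merc_to_lonlat(-8529000, 4762000)  # roughly Baltimore (lon ~ -76.6, lat ~ 39.3)
```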

2.3 Poverty Data set

This dataset provides information about the percent of family households living below the poverty line. This indicator measures the percentage of households whose income fell below the poverty threshold out of all households in an area.

Federal and state governments use such estimates to allocate funds to local communities. Local communities use these estimates to identify the number of individuals or families eligible for various programs. This information will be useful for us to study the dispersion of CCTVs within Baltimore in relation to the poverty level in a given area. This dataset contains 55 observations, one percentage for each community statistical area. There seems to be only one NA. The most relevant variables are the following:

  • CSA2010 = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.

  • hhpov15 - hhpov19 = each of these five columns contains the percent of Family Households Living Below the Poverty Line for a given year, from 2015 to 2019.

  • Shape_Area - Shape_Length = standard fields to determine the area and the perimeter of a polygon

poverty_data <- read.csv(file = here::here("data/Percent_of_Family_Households_Living_Below_the_Poverty_Line.csv"))

Source of the data set: [https://arcg.is/1qOrnH]
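The five yearly poverty columns are stored in wide format; for some analyses, a long format (one row per CSA and year) is handier. A minimal base-R sketch with two made-up CSAs and made-up rates:

```r
# Sketch: reshape the hhpov15-hhpov19 columns from wide to long format,
# one row per CSA and year. CSA names and rates below are made up.
pov_wide <- data.frame(
  CSA2010 = c("CSA A", "CSA B"),
  hhpov15 = c(20.1, 3.2), hhpov16 = c(19.8, 3.0), hhpov17 = c(21.4, 2.9),
  hhpov18 = c(20.7, 3.1), hhpov19 = c(19.9, 2.8)
)
pov_long <- reshape(pov_wide, direction = "long",
                    varying = paste0("hhpov", 15:19), v.names = "hhpov",
                    timevar = "year", times = 2015:2019, idvar = "CSA2010")
pov_long[order(pov_long$CSA2010, pov_long$year), ]  # 10 rows: 2 CSAs x 5 years
```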

2.4 Area Data set

This dataset provides information about the Community Statistical Area geographies for Baltimore City. It is based on aggregations of Census tract (2010) geographies. It will serve as a geographical point of reference to match each dataset’s observations together. This dataset contains 55 observations, one for each area.

area_data <- read_csv(file = here::here("data/Community_Statistical_Areas__CSAs___Reference_Boundaries.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/community-statistical-area-1/explore?location=39.284605%2C-76.620550%2C12.26]

2.5 Population Data set

This data set provides information about the population of each Community Statistical Area. Information about the total population in 2010 and 2020 is provided. It will be useful to calculate per-capita values in each community. The most relevant variables are the following:

  • community = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.

  • tpop20 = total population for each Community Statistical Area in 2020

population_data <- read.csv(file = here::here("data/Total_Population.csv"))

Source of the data set: [INSERT SOURCE HERE]

2.6 Household Internet Data set

This data set gives information about the percentage of households with no internet in each of the 55 Community Statistical Areas. This information is provided for the years 2017, 2018 and 2019. It will be useful to detect whether there is a relationship between internet access and crimes or CCTV installations in neighborhoods. The most important variables are:

  • CSA2010 = name of the community statistical area.

  • nohhint17 = percentage of households in this particular neighborhood with no internet access in the year 2017.

  • nohhint18 = percentage of households in this particular neighborhood with no internet access in the year 2018.

  • nohhint19 = percentage of households in this particular neighborhood with no internet access in the year 2019.

  • Shape_Area = standard field to determine the area of a polygon

  • Shape_Lenght = standard field to determine the perimeter of a polygon

Percent_of_Households_with_No_Internet_at_Home <- read.csv(file = here::here("data/Percent_of_Households_with_No_Internet_at_Home.csv"))

Source of the data set: [INSERT SOURCE HERE]

2.7 Data Wrangling

2.7.1 Data Wrangling: Area

Here, the main goal is to transform the area data set into a new data set that contains one observation per neighborhood. Indeed, it is important to distinguish neighborhoods, which are smaller areas, from communities, which are larger and often contain several neighborhoods. We achieve this by first creating a new data set in which each neighborhood is assigned to a community using separate_rows, and second by establishing a new column in lower case letters for a later merge. To do so, we combine the mutate function with tolower, which converts the uppercase letters of a string to lower case.

area_data2 <- separate_rows(area_data, Neigh, sep = ", ") #Creation of a new data set with each neighborhood being assigned to an area

area_data2 <- mutate(area_data2,neigh=tolower(Neigh)) #Creation of new column with lower case letters

2.7.2 Data Wrangling: Crime

Since the neighborhood names must match across data sets, we also create a lower-case column in the crime data set in order to join the two data sets. We join the area data set and the crime data set using left_join. Next, we use the anti_join function to find which observations did not match. The outcome shows all the neighborhoods that did not match. As shown below, the issues mostly come from spelling differences (e.g. Mount written as Mt.). As very few observations fail to match, we change the names manually.

  • mount washington \(→\) Mt. Washington
  • carroll - camden industrial area \(→\) Caroll-Camden Industrial Area
  • patterson park neighborhood \(→\) Patterson Park
  • glenham-belhar \(→\) Glenham-Belford
  • new southwest/mount clare \(→\) Hollins Market
  • mount winans \(→\) Mt. Winans
  • rosemont homeowners/tenants \(→\) Rosemont
  • broening manor \(→\) O’Donnell Heights
  • boyd-booth \(→\) Booth-boyd
  • lower herring run park \(→\) Herring Run Park
  • mt pleasant park \(→\) Mt. Pleasant Park
crime_data <- mutate(crime_data,neigh=tolower(crime_data$Neighborhood)) #Creation of new column with lower case letters

crime_data_with_areas <- crime_data %>% 
  left_join(area_data2,by="neigh") #We create a new data sets that contains the name of the area in which the crime was committed

crime_data_NAs <- crime_data %>% 
  anti_join(area_data2,
            by="neigh") #Here is the list of all the NAs we have

unique(crime_data_NAs$neigh) #We see that we have very few unassigned names, we can change this by hand.

crime_data["neigh"][crime_data["neigh"]=="mount washington"] <- "mt. washington"
crime_data["neigh"][crime_data["neigh"]=="carroll - camden industrial area"] <- "caroll-camden industrial area"
crime_data["neigh"][crime_data["neigh"]=="patterson park neighborhood"] <- "patterson park"
crime_data["neigh"][crime_data["neigh"]=="glenham-belhar"] <- "glenham-belford"
crime_data["neigh"][crime_data["neigh"]=="new southwest/mount clare"] <- "hollins market"
crime_data["neigh"][crime_data["neigh"]=="mount winans"] <- "mt. winans"
crime_data["neigh"][crime_data["neigh"]=="rosemont homeowners/tenants"] <- "rosemont"
crime_data["neigh"][crime_data["neigh"]=="broening manor"] <- "o'donnell heights"
crime_data["neigh"][crime_data["neigh"]=="boyd-booth"] <- "booth-boyd"
crime_data["neigh"][crime_data["neigh"]=="lower herring run park"] <- "herring run park"
crime_data["neigh"][crime_data["neigh"]=="mt pleasant park"] <- "mt. pleasant park"


We get rid of the 764 remaining observations, which had no information about the neighborhood. This represents a very tiny portion of our total number of observations. Finally, we use the semi_join function to create the final data set, which is essentially the original data set minus those 764 observations.

Finally, we want to get rid of the observations dating from before 2000, as the Baltimore CCTV program started in the year 2000. We first check the structure of the data set using the str function and notice that the CrimeDateTime column is not a date. We convert it and then keep the observations we want using filter.

crime_data_with_areas <- crime_data %>% 
 semi_join(area_data2,by="neigh") %>% 
  left_join(area_data2,by="neigh") #Here we have the final data frame with a community for each crime

str(crime_data_with_areas) # We see that the crime CrimeDateTime column is not a date. We thus convert it.

crime_data_with_areas$CrimeDateTime <-  as.Date(crime_data_with_areas$CrimeDateTime)

crime_data_with_areas <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2000-01-01")) #We had 24 observations dating back to before the year 2000 and 24 observations with no date. We only keep crimes committed after 2000, as the CCTV program in Baltimore started in 2000.

2.7.3 Data Wrangling: Poverty

56 areas are included in the standard community statistical area system. However, these 56 statistical areas also include the jail. The poverty data, on the other hand, only covers 55 statistical areas, since there obviously is no poverty data for the jail. To solve this inconsistency, we add a new line. Moreover, we need to fill in a missing value for South Baltimore in the year 2019: here we take the average of the previous years.

poverty_data <- rbind(poverty_data,list(56,"Unassigned -- Jail",0,0,0,0,0,0,0))

poverty_data[48,7] <- mean(unlist(poverty_data[48, 3:6])) #The poverty rate of South Baltimore in 2019 was missing. This area's rate has been stable over the past years (always one of the richest areas), so we replace the missing value with the mean of the previous 4 years.

2.7.4 Data Wrangling: CCTV

This data set seems rather tidy. We will mostly use the first two columns, which contain information about the location of each CCTV. We therefore need to make sure that there are no missing values in these two columns. We do so by combining the which and is.na functions and by filtering for potentially empty observations.

which(is.na(cctv_data$X))
#> integer(0)
which(is.na(cctv_data$Y))
#> integer(0)
filter(cctv_data, cctv_data$X=="")
#>  [1] X                Y                OBJECTID        
#>  [4] CAM_NUM          NOTES            LOCATION        
#>  [7] PROJ             XCOORD           YCOORD          
#> [10] created_user     created_date     last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
filter(cctv_data, cctv_data$Y=="") 
#>  [1] X                Y                OBJECTID        
#>  [4] CAM_NUM          NOTES            LOCATION        
#>  [7] PROJ             XCOORD           YCOORD          
#> [10] created_user     created_date     last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
#We are not sure this is the proper technique, but by doing so we ensure that we have neither NAs nor empty values, and thus that our data set is tidy.

2.7.5 Data Wrangling: Household internet in CSA’s

At first sight, this household internet data set from Baltimore looks very tidy. Nevertheless, we quickly run some code to check for missing values and detect anomalies.

sum(is.na(Percent_of_Households_with_No_Internet_at_Home))
#> [1] 0

Having examined the sum of NAs, we see that this dataset is clean. Since there are only 55 rows, the jail was not included when this data set about internet access was put together (which makes sense, since the jail probably has its own internet access but no households to count).

Exploratory data analysis

3.1 Calculation of the density of CCTV per community

The original CCTV data set posed a slight challenge. Although it contained some neighborhood names, most of them did not match the “standard” neighborhood names. To solve this, we resorted to geospatial counting.

Our procedure includes the following steps. After reading in the table and converting it into a data table, we define which columns will serve as the coordinates of the newly created spatial file. We have several types of coordinates here; we use X and Y, which follow the EPSG:3857 WGS 84 / Pseudo-Mercator coordinate system. Spatial files must have coordinate systems assigned to them. In the case at hand, we work with the above-mentioned EPSG:3857 WGS 84 / Pseudo-Mercator coordinate system for all the spatial files we are going to use. Therefore, to ensure consistency, we create a CRS object called crs.geo1 that is assigned to all the spatial files we will use. To assign a known CRS to spatial data, we use the proj4string function, to which we assign crs.geo1.

#read in data table
balt_dat <-  fread(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))

#convert to data table
balt_dat <- as.data.table(balt_dat)

#make data spatial
coordinates(balt_dat) <-  c("X","Y")
crs.geo1 <-  CRS("+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs +type=crs")
proj4string(balt_dat) <-  crs.geo1  

Then we plot to see the output (as cloud of points which represent all the CCTVs).

plot(balt_dat, pch = 20, col = "steelblue") #We can use the plot function to quickly plot the SpatialPointDataFrame that we created. We see a bunch of points which represent the CCTV location in Baltimore.

Next, we have to work with the shapefile, which is another special type of file: essentially a set of polygons representing the different areas of the city of Baltimore. We downloaded this file from the Open Baltimore portal. We read it in and assign our crs.geo1 coordinate system to this file as well, ensuring that all our files share the same coordinate system.

#read in shapefile of baltimore
baltimore <-  readOGR(dsn = here::here("data/Community_Statistical_Area"), layer = "Community_Statistical_Area") #name of file and object
proj4string(baltimore) <- crs.geo1

We can now plot these two spatial files together to see the spread of CCTVs over the 56 community statistical areas.

#plot
plot(baltimore,main="Spread of CCTVs in different communities of Baltimore")
plot(balt_dat,pch=20, col="steelblue" , add=TRUE) #If we plot these two lines together, what we obtain is a map of baltimore, we have the 56 community statistical areas and the CCTVs on top of the map.

To illustrate these results numerically, we need R to count for us how many CCTVs belong to which area. Here, the over function determines how many CCTVs are laid over a given polygon. Next, we create a new object called counts and turn it into a data frame (so that it is easier to work with). We use sum to ensure that all 836 observations were indeed counted, which is the case. Still, we notice that we only have 41 rows, meaning that only 41 out of the 56 areas contain at least one CCTV.

#Perform the count
proj4string(balt_dat)
proj4string(baltimore) #To be able to perform the count, we must ensure that the two spatial files have a similar CRS. This is the case as we attributed these two files "crs.geo1" 

res2 <- over(balt_dat,baltimore) #This function tells you to which community each CCTV belongs to
counts <- table(res2$community)
counts <- as.data.frame(counts)
colnames(counts)[1] <- "Community"
sum(counts$Freq) #We see that we have 836 observations in total, which is a good sign, as our initial CCTV data set contained 836 observations

To make this workable, we need to create a new CCTV file, in which we assign 0 to each NA location. Lastly, we create a new column with the mutate function to calculate the CCTV density, i.e. the number of CCTVs in each area divided by the total number of CCTVs.

CCTV_per_area <- area_data[2] %>% 
  left_join(counts,by="Community") #One must add the communities where there are no counts i.e no CCTV

CCTV_per_area[is.na(CCTV_per_area)] <- 0

CCTV_per_area <- mutate(CCTV_per_area, density_perc=(CCTV_per_area$Freq/(sum(CCTV_per_area$Freq)))*100)

3.1.1 Mapping of CCTV density

We now want to map CCTV density on the Baltimore map. We first use the %in% operator to ensure that the communities in the Baltimore shapefile are the same as those in the CCTV per area data set. As this returns only TRUE values, the match is complete and we can proceed with the analysis.

library(tmap)
baltimore$community %in% CCTV_per_area$Community

Next, we perform a left_join between the Baltimore shapefile and the CCTV per area data set. To deal with the different spellings of the join column (capitalised in one data set, lower case in the other), we pass a named vector to the by argument. Finally, we create the map with the tmap package. The tmap package works much like the ggplot2 package: we first define an element, always starting with the tm_shape argument, and then add as many layers as we wish with the + operator. We use the Baltimore shapefile, fill it with the density percentage, define some breaks, set the borders and finally the layout.

baltimore@data <- left_join(baltimore@data, CCTV_per_area, by = c('community' = 'Community'))

CCTV_dens_map <- tm_shape(baltimore) + tm_fill(col = "density_perc", title ="CCTV density per Area in %", breaks=c(0,1,2,3,4,5,6,7,8,9,10,11)) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

tmap_mode("plot")
CCTV_dens_map

3.2 Calculation of the crime per capita per community

We now create CrimeStatsPerArea. To do so, we group the crime_data_with_areas data set by community and then use summarize, which enables us to compute the crime frequency for each area. Then, using the population data, we divide the crime frequency by the number of inhabitants in each area and multiply by 1000 to obtain the crime rate per 1000 inhabitants. Again, we add one more row because we have no values for the prison. To make sure we made no mistake, we add up the CrimeFrequency column to check that it equals 349482. This is the case, so we can proceed confidently.

CrimeStatsPerArea <- crime_data_with_areas %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency=n())

CrimeStatsPerArea <-  mutate(CrimeStatsPerArea,CrimePer1000inhabitants=((CrimeStatsPerArea$CrimeFrequency/population_data$tpop20)*1000))

CrimeStatsPerArea <- rbind(CrimeStatsPerArea,list("Unassigned -- Jail",0,0))  #We have no information about crimes committed in jail, yet, the community statistical area encompass 56 area, including jail. In order to ensure consistency, we must add a 56th observation in this data frame.

sum(CrimeStatsPerArea$CrimeFrequency) #The total sum is 349482, which is what we expect

Community_data <- CrimeStatsPerArea[,-2] %>% 
  left_join(CCTV_per_area,by="Community") %>%
  left_join(poverty_data[,c(2,7)],by=c("Community"="CSA2010"))

3.2.1 Mapping of crime per capita per community

We want to map crimes per capita per community. The methodology is the same as for CCTV density. This time, we use the “quantile” method to create category breaks.

library(tmap)

baltimore$community %in% CrimeStatsPerArea$Community #We see that we have a perfect match

baltimore@data <- left_join(baltimore@data, CrimeStatsPerArea, by = c('community' = 'Community'))

Crime_per_capita_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

tmap_mode("plot")
Crime_per_capita_map

3.2.2 Creation of a distorted map

To visualise the distribution of crime per capita across Baltimore’s communities, we decided to use a distorted map. Again, we use the tmap package, together with the cartogram_ncont function, which distorts the map based on the intensity of crime per capita in each community. Concretely, we want to show that crime per capita is higher in the city center than in the suburban areas. This can be shown quite neatly graphically.

Distorted_Crime_map <- tm_shape(cartogram_ncont(baltimore, "CrimePer1000inhabitants"))+tm_fill(col = "CrimePer1000inhabitants", title ="Crime per 1000 inhabitants per area",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.07) #This map distorts the size of each area depending on its crime per capita. It is interesting as it enables one to see that higher crime per capita tends to be concentrated in the city center.

tmap_mode("plot")
Distorted_Crime_map

3.3 Calculation of crime per capita by type of crime

The first thing we do here is compute the unique values of the Description column of the crime data set. We see that we have 14 types of crime. We want to observe crimes by type, and therefore make new classifications. The law distinguishes three basic classes of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions. The 14 types of crime are thus divided into the two remaining categories:

  • Misdemeanor: LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
  • Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING
unique(crime_data_with_areas$Description)

#We see that we have 14 types of crime. We want to observe crimes by type, therefore we make new classifications. The law distinguishes three basic classes of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions.

#Misdemeanor:LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
#Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING

Next we create a data set called crime_cat, which tells us which recorded crime type belongs to which crime category. This data set is then left-joined with the crime data. We are thus left with the crime data set (with areas) augmented by a new column indicating whether the crime was a felony or a misdemeanor.

crime_cat <- data.frame(Category=c("Misdemeanor","Felony"), Description=c(c("LARCENY FROM AUTO,COMMON ASSAULT,ROBBERY - COMMERCIAL,LARCENY"),c("RAPE,ARSON,HOMICIDE,BURGLARY,AUTO THEFT,ROBBERY - CARJACKING,AGG. ASSAULT,ROBBERY - STREET,ROBBERY - RESIDENCE,SHOOTING")))

crime_cat <- separate_rows(crime_cat, Description, sep = ",")

crime_cat$Description %in% unique(crime_data_with_areas$Description) #Ensure we have a perfect match

crime_data_with_areas <- crime_data_with_areas %>% 
  left_join(crime_cat,by="Description") #We add a new variable to our crime data set

Next, we compute CrimePerCategoryPerArea. We again use the piping operator, this time grouping by community and category. We check that we indeed still have 349482 observations. From that, we compute felonies and misdemeanors per capita in each community and (again) add the prison row to the newly created data frames.

CrimePerCategoryPerArea <- crime_data_with_areas %>% 
  group_by(Community,Category) %>%
  summarize(RepartitionPerCategoryPerArea=n())

sum(CrimePerCategoryPerArea$RepartitionPerCategoryPerArea) #Again, we check that we indeed have 349482 observations

CrimeCategoryRepartition <- CrimePerCategoryPerArea %>% 
  group_by(Category) %>% 
  summarise(Repartition=sum(RepartitionPerCategoryPerArea)) #We observe that in Baltimore, the number of felonies is close to the number of misdemeanors

FelonyStats <-  CrimePerCategoryPerArea %>% filter(Category=="Felony") 

FelonyStats$FelonyPerCapitaPerArea <-((CrimePerCategoryPerArea%>% filter(Category=="Felony"))[[3]]/population_data$tpop20)*1000

FelonyStats[56,] <- list("Unassigned -- Jail","Felony",0,0)

MisdemeanorStats <-  CrimePerCategoryPerArea %>% filter(Category=="Misdemeanor") 

MisdemeanorStats$MisdemeanorPerCapitaPerArea <-((CrimePerCategoryPerArea%>% filter(Category=="Misdemeanor"))[[3]]/population_data$tpop20)*1000

MisdemeanorStats[56,] <- list("Unassigned -- Jail","Misdemeanor",0,0)

Community_data <- Community_data %>% 
  left_join(FelonyStats[,-c(2:3)],by="Community") %>%
  left_join(MisdemeanorStats[,-c(2:3)],by="Community")

As mentioned earlier, it is also possible to divide the crimes committed in Baltimore by ‘type’ of crime. A distinction is generally made between property crime and violent crime. In a property crime, a victim’s property is stolen or destroyed, without the use or threat of force against the victim. Property crimes include burglary and theft as well as vandalism and arson. In a violent crime, a victim is harmed by or threatened with violence. Violent crimes include rape and sexual assault, robbery, assault and murder.

In order to determine whether the crimes contained in our crime_data_with_areas are violent or property crimes, we use another data set provided by the Baltimore open data portal, which documents the crime codes used by the police to categorize crimes. We first import the data set. Then we check whether the codes really match: three crime codes are written with an extra trailing blank space, which we correct. Using the left_join function, we then add a new column to our crime_data_with_areas data frame. Finally, we create data frames for both violent and property crime, following the same methodology as for felonies and misdemeanors.

crimecode_data <- read.csv(file = here::here("data/Balt_CRIME_CODES.csv"))

unique(crime_data_with_areas$CrimeCode) %in% unique(crimecode_data$CODE) #We identify spelling errors

crimecode_data$CODE[185] <- "8H"
crimecode_data$CODE[186] <- "8I"
crimecode_data$CODE[187] <- "8J"

crime_data_with_areas <- crime_data_with_areas %>% 
  left_join(crimecode_data[,c(1,8)],by=c("CrimeCode"="CODE"))

unique(crime_data_with_areas$VIO_PROP_CFS)
which(is.na(crime_data_with_areas$VIO_PROP_CFS)) #We ensure that we have no NAs

CrimePerCategory2PerArea <- crime_data_with_areas %>% 
  group_by(Community,VIO_PROP_CFS) %>%
  summarize(RepartitionPerCategory2PerArea=n())

sum(CrimePerCategory2PerArea$RepartitionPerCategory2PerArea) #Again, we check that we indeed have 349482 observations

CrimeCategory2Repartition <- CrimePerCategory2PerArea %>% 
  group_by(VIO_PROP_CFS) %>% 
  summarise(Repartition=sum(RepartitionPerCategory2PerArea))

PropertyStats <-  CrimePerCategory2PerArea %>% filter(VIO_PROP_CFS=="PROPERTY") 

PropertyStats$PropertyCrimePerCapitaPerArea <-((CrimePerCategory2PerArea%>% filter(VIO_PROP_CFS=="PROPERTY"))[[3]]/population_data$tpop20)*1000

PropertyStats[56,] <- list("Unassigned -- Jail","PROPERTY",0,0)

ViolentStats <-  CrimePerCategory2PerArea %>% filter(VIO_PROP_CFS=="VIOLENT") 

ViolentStats$ViolentCrimePerCapitaPerArea <-((CrimePerCategory2PerArea%>% filter(VIO_PROP_CFS=="VIOLENT"))[[3]]/population_data$tpop20)*1000

ViolentStats[56,] <- list("Unassigned -- Jail","VIOLENT",0,0)

Community_data <- Community_data %>% 
  left_join(ViolentStats[,c(1,4)],by="Community") %>% 
  left_join(PropertyStats[,c(1,4)],by="Community")

3.3.1 Mapping of felonies and misdemeanors

After ensuring that we have a perfect match, we perform a left join for felonies and misdemeanors and map everything.

#Felony

baltimore$community %in% FelonyStats$Community

baltimore@data <- left_join(baltimore@data, FelonyStats, by = c('community' = 'Community'))

Felony_map <- tm_shape(baltimore) + tm_fill(col = "FelonyPerCapitaPerArea", title ="Felonies per 1,000 inhabitants per area",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

Felony_map

#Misdemeanor

baltimore$community %in% MisdemeanorStats$Community

baltimore@data <- left_join(baltimore@data, MisdemeanorStats, by = c('community' = 'Community'))

Misdemeanor_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorPerCapitaPerArea", title ="Misdemeanors per 1,000 inhabitants per area",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

Misdemeanor_map

3.4 Calculation of crime evolution

The idea is to get information about how crime evolved over time. We could have used a loop here but have not yet found a clean way to do so, so we created one data set per year. The results are interesting: comparing the number of observations in each per-year data set, we see roughly 40,000 cases a year, except in 2020 (due to COVID) and 2021 (which is not finished). We do not create data sets for 2013 and earlier because very few observations date from before 2014. The graph represents the monthly evolution of crime for each year. There seems to be a recurring pattern: each year, crime increases mid-year before decreasing towards December.

Crime_in_2021 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-01-01") & CrimeDateTime <= as.Date("2021-12-31"))

Crime_in_2020 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2020-01-01") & CrimeDateTime <= as.Date("2020-12-31"))

Crime_in_2019 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2019-01-01") & CrimeDateTime <= as.Date("2019-12-31"))

Crime_in_2018 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2018-01-01") & CrimeDateTime <= as.Date("2018-12-31"))

Crime_in_2017 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2017-01-01") & CrimeDateTime <= as.Date("2017-12-31"))

Crime_in_2016 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2016-01-01") & CrimeDateTime <= as.Date("2016-12-31"))

Crime_in_2015 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2015-01-01") & CrimeDateTime <= as.Date("2015-12-31"))

Crime_in_2014 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2014-01-01") & CrimeDateTime <= as.Date("2014-12-31"))

crime_data_with_areas %>%  filter(CrimeDateTime < as.Date("2014-01-01")) #We see that we have very few (76) observations before 2014, thus we do not consider them

Crime_Monthly_evolution_map <- crime_data_with_areas %>% 
  count(month=floor_date(CrimeDateTime,"month")) %>% 
  ggplot(aes(month,n))+geom_line()+
  scale_x_date(limits = c(as.Date("2014-01-01"), as.Date("2021-08-31"))) #This enables us to see how crime evolves, month after month

Crime_Monthly_evolution_map

Next, we calculate the crime per capita for each year with the piping operator, grouping by community and summarizing the counts. In the end, we create the crime_evolution data set, which combines all yearly data.

#_____ Calculations of the crime rates

CrimePerCapitaPerArea2021 <- Crime_in_2021 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency21=n())

CrimePerCapitaPerArea2021 <-  mutate(CrimePerCapitaPerArea2021,CrimePer1000inhabitants21=((CrimePerCapitaPerArea2021$CrimeFrequency21/population_data$tpop20)*1000))

CrimePerCapitaPerArea2021 <- rbind(CrimePerCapitaPerArea2021,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2020 <- Crime_in_2020 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency20=n())

CrimePerCapitaPerArea2020 <-  mutate(CrimePerCapitaPerArea2020,CrimePer1000inhabitants20=((CrimePerCapitaPerArea2020$CrimeFrequency20/population_data$tpop20)*1000))

CrimePerCapitaPerArea2020 <- rbind(CrimePerCapitaPerArea2020,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2019 <- Crime_in_2019 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency19=n())

CrimePerCapitaPerArea2019 <-  mutate(CrimePerCapitaPerArea2019,CrimePer1000inhabitants19=((CrimePerCapitaPerArea2019$CrimeFrequency19/population_data$tpop20)*1000))

CrimePerCapitaPerArea2019 <- rbind(CrimePerCapitaPerArea2019,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2018 <- Crime_in_2018 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency18=n())

CrimePerCapitaPerArea2018 <-  mutate(CrimePerCapitaPerArea2018,CrimePer1000inhabitants18=((CrimePerCapitaPerArea2018$CrimeFrequency18/population_data$tpop20)*1000))

CrimePerCapitaPerArea2018 <- rbind(CrimePerCapitaPerArea2018,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2017 <- Crime_in_2017 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency17=n())

CrimePerCapitaPerArea2017 <-  mutate(CrimePerCapitaPerArea2017,CrimePer1000inhabitants17=((CrimePerCapitaPerArea2017$CrimeFrequency17/population_data$tpop20)*1000))

CrimePerCapitaPerArea2017 <- rbind(CrimePerCapitaPerArea2017,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2016 <- Crime_in_2016 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency16=n())

CrimePerCapitaPerArea2016 <-  mutate(CrimePerCapitaPerArea2016,CrimePer1000inhabitants16=((CrimePerCapitaPerArea2016$CrimeFrequency16/population_data$tpop20)*1000))

CrimePerCapitaPerArea2016 <- rbind(CrimePerCapitaPerArea2016,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2015 <- Crime_in_2015 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency15=n())

CrimePerCapitaPerArea2015 <-  mutate(CrimePerCapitaPerArea2015,CrimePer1000inhabitants15=((CrimePerCapitaPerArea2015$CrimeFrequency15/population_data$tpop20)*1000))

CrimePerCapitaPerArea2015 <- rbind(CrimePerCapitaPerArea2015,list("Unassigned -- Jail",0,0))

CrimePerCapitaPerArea2014 <- Crime_in_2014 %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency14=n())

CrimePerCapitaPerArea2014 <-  mutate(CrimePerCapitaPerArea2014,CrimePer1000inhabitants14=((CrimePerCapitaPerArea2014$CrimeFrequency14/population_data$tpop20)*1000))

CrimePerCapitaPerArea2014 <- rbind(CrimePerCapitaPerArea2014,list("Unassigned -- Jail",0,0))
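The eight nearly identical blocks above could be generated with a single loop. A minimal sketch on toy stand-ins for our real objects (the toy data and column values are hypothetical; joining the population by Community, rather than relying on row order, would also make the per-capita step more robust):

```r
library(dplyr)

# Toy stand-ins for crime_data_with_areas and population_data (illustrative values).
crime_toy <- data.frame(
  Community = rep(c("A", "B"), times = 4),
  CrimeDateTime = as.Date(c("2014-03-01", "2014-06-01", "2015-03-01", "2015-06-01",
                            "2014-07-01", "2015-08-01", "2014-09-01", "2015-10-01"))
)
pop_toy <- data.frame(Community = c("A", "B"), tpop20 = c(1000, 2000))

# One loop instead of eight copy-pasted blocks: filter by year, count per
# community, join the population by Community, and compute the rate.
for (yr in 2014:2015) {
  yearly <- crime_toy %>%
    filter(format(CrimeDateTime, "%Y") == as.character(yr)) %>%
    group_by(Community) %>%
    summarize(CrimeFrequency = n()) %>%
    left_join(pop_toy, by = "Community") %>%
    mutate(CrimePer1000inhabitants = CrimeFrequency / tpop20 * 1000)
  assign(paste0("CrimePerCapitaPerArea", yr), yearly)  # keeps the per-year names
}
```

On the real data the loop would run over 2014:2021 and still append the "Unassigned -- Jail" row afterwards, exactly as in the blocks above.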

crime_evolution <- CrimePerCapitaPerArea2021 %>% 
  left_join(CrimePerCapitaPerArea2020,by="Community") %>% 
  left_join(CrimePerCapitaPerArea2019,by="Community") %>%
  left_join(CrimePerCapitaPerArea2018,by="Community") %>%
  left_join(CrimePerCapitaPerArea2017,by="Community") %>% 
  left_join(CrimePerCapitaPerArea2016,by="Community") %>% 
  left_join(CrimePerCapitaPerArea2015,by="Community") %>% 
  left_join(CrimePerCapitaPerArea2014,by="Community")

Community_data <- Community_data %>% 
  left_join(crime_evolution,by="Community")

Another interesting way to visualise how crime evolved is an animated map, created with the tmap_animation function. To use it, we have to build a rather particular tibble. In our case, we want the animated map to display the evolution of crime per capita over 7 years (from 2014 to 2020; we discard 2021 as the year is not complete). We therefore need 7 x 56 observations: one crime per capita value per year for each of the 56 areas. The tibble becomes more peculiar still because for each observation we have to add, in a separate column, a polygon (an S4 object) corresponding to the area in question. We did not manage to replicate these S4 elements with a function like rep, so we did it manually.

Once the tibble is built, we want to merge its data into a SpatialPolygonsDataFrame, namely the baltimore object. However, as the tibble contains 392 observations, this would enlarge our SpatialPolygonsDataFrame; since the baltimore object is also used for other purposes, we create an alias. We then merge the newly created tibble with this alias, simply using left_join. We also create a bbox object and an object called pb: the first delimits the geographical area of interest and the second defines custom break classes. Finally, we create a map with the tm_shape function and animate it using tmap_animation.
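A note on the manual enumeration below: while rep cannot replicate a single S4 object, it does replicate the elements of a plain list, and baltimore@polygons is a list of S4 Polygons objects. A toy illustration (the Poly class here is a minimal stand-in; on the real data, rep(baltimore@polygons, each = 7) could likely replace the manual construction):

```r
# Minimal S4 stand-in to show that rep() replicates list elements,
# even when those elements are S4 objects.
setClass("Poly", slots = c(id = "numeric"))
polys <- list(new("Poly", id = 1), new("Poly", id = 2))

replicated <- rep(polys, each = 7)  # each element repeated 7 times
length(replicated)                  # 14
```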

anim_tibble <-  tibble(Year=rep(2020:2014,56),Community=rep(Community_data$Community,each=7),CrimeRate=as.vector(t(crime_evolution[,-c(1,2,3,4,6,8,10,12,14,16)])),geometry=list(
  baltimore@polygons[[1]],baltimore@polygons[[1]],baltimore@polygons[[1]],baltimore@polygons[[1]],baltimore@polygons[[1]],baltimore@polygons[[1]],baltimore@polygons[[1]],
  baltimore@polygons[[2]],baltimore@polygons[[2]],baltimore@polygons[[2]],baltimore@polygons[[2]],baltimore@polygons[[2]],baltimore@polygons[[2]],baltimore@polygons[[2]],
  baltimore@polygons[[3]],baltimore@polygons[[3]],baltimore@polygons[[3]],baltimore@polygons[[3]],baltimore@polygons[[3]],baltimore@polygons[[3]],baltimore@polygons[[3]],
  baltimore@polygons[[4]],baltimore@polygons[[4]],baltimore@polygons[[4]],baltimore@polygons[[4]],baltimore@polygons[[4]],baltimore@polygons[[4]],baltimore@polygons[[4]],
  baltimore@polygons[[5]],baltimore@polygons[[5]],baltimore@polygons[[5]],baltimore@polygons[[5]],baltimore@polygons[[5]],baltimore@polygons[[5]],baltimore@polygons[[5]],
  baltimore@polygons[[6]],baltimore@polygons[[6]],baltimore@polygons[[6]],baltimore@polygons[[6]],baltimore@polygons[[6]],baltimore@polygons[[6]],baltimore@polygons[[6]],
  baltimore@polygons[[7]],baltimore@polygons[[7]],baltimore@polygons[[7]],baltimore@polygons[[7]],baltimore@polygons[[7]],baltimore@polygons[[7]],baltimore@polygons[[7]],
  baltimore@polygons[[8]],baltimore@polygons[[8]],baltimore@polygons[[8]],baltimore@polygons[[8]],baltimore@polygons[[8]],baltimore@polygons[[8]],baltimore@polygons[[8]],
  baltimore@polygons[[9]],baltimore@polygons[[9]],baltimore@polygons[[9]],baltimore@polygons[[9]],baltimore@polygons[[9]],baltimore@polygons[[9]],baltimore@polygons[[9]],
  baltimore@polygons[[10]],baltimore@polygons[[10]],baltimore@polygons[[10]],baltimore@polygons[[10]],baltimore@polygons[[10]],baltimore@polygons[[10]],baltimore@polygons[[10]],
  baltimore@polygons[[11]],baltimore@polygons[[11]],baltimore@polygons[[11]],baltimore@polygons[[11]],baltimore@polygons[[11]],baltimore@polygons[[11]],baltimore@polygons[[11]],
  baltimore@polygons[[12]],baltimore@polygons[[12]],baltimore@polygons[[12]],baltimore@polygons[[12]],baltimore@polygons[[12]],baltimore@polygons[[12]],baltimore@polygons[[12]],
  baltimore@polygons[[13]],baltimore@polygons[[13]],baltimore@polygons[[13]],baltimore@polygons[[13]],baltimore@polygons[[13]],baltimore@polygons[[13]],baltimore@polygons[[13]],
  baltimore@polygons[[14]],baltimore@polygons[[14]],baltimore@polygons[[14]],baltimore@polygons[[14]],baltimore@polygons[[14]],baltimore@polygons[[14]],baltimore@polygons[[14]],
  baltimore@polygons[[15]],baltimore@polygons[[15]],baltimore@polygons[[15]],baltimore@polygons[[15]],baltimore@polygons[[15]],baltimore@polygons[[15]],baltimore@polygons[[15]],
  baltimore@polygons[[16]],baltimore@polygons[[16]],baltimore@polygons[[16]],baltimore@polygons[[16]],baltimore@polygons[[16]],baltimore@polygons[[16]],baltimore@polygons[[16]],
  baltimore@polygons[[17]],baltimore@polygons[[17]],baltimore@polygons[[17]],baltimore@polygons[[17]],baltimore@polygons[[17]],baltimore@polygons[[17]],baltimore@polygons[[17]],
  baltimore@polygons[[18]],baltimore@polygons[[18]],baltimore@polygons[[18]],baltimore@polygons[[18]],baltimore@polygons[[18]],baltimore@polygons[[18]],baltimore@polygons[[18]],
  baltimore@polygons[[19]],baltimore@polygons[[19]],baltimore@polygons[[19]],baltimore@polygons[[19]],baltimore@polygons[[19]],baltimore@polygons[[19]],baltimore@polygons[[19]],
  baltimore@polygons[[20]],baltimore@polygons[[20]],baltimore@polygons[[20]],baltimore@polygons[[20]],baltimore@polygons[[20]],baltimore@polygons[[20]],baltimore@polygons[[20]],
  baltimore@polygons[[21]],baltimore@polygons[[21]],baltimore@polygons[[21]],baltimore@polygons[[21]],baltimore@polygons[[21]],baltimore@polygons[[21]],baltimore@polygons[[21]],
  baltimore@polygons[[22]],baltimore@polygons[[22]],baltimore@polygons[[22]],baltimore@polygons[[22]],baltimore@polygons[[22]],baltimore@polygons[[22]],baltimore@polygons[[22]],
  baltimore@polygons[[23]],baltimore@polygons[[23]],baltimore@polygons[[23]],baltimore@polygons[[23]],baltimore@polygons[[23]],baltimore@polygons[[23]],baltimore@polygons[[23]],
  baltimore@polygons[[24]],baltimore@polygons[[24]],baltimore@polygons[[24]],baltimore@polygons[[24]],baltimore@polygons[[24]],baltimore@polygons[[24]],baltimore@polygons[[24]],
  baltimore@polygons[[25]],baltimore@polygons[[25]],baltimore@polygons[[25]],baltimore@polygons[[25]],baltimore@polygons[[25]],baltimore@polygons[[25]],baltimore@polygons[[25]],
  baltimore@polygons[[26]],baltimore@polygons[[26]],baltimore@polygons[[26]],baltimore@polygons[[26]],baltimore@polygons[[26]],baltimore@polygons[[26]],baltimore@polygons[[26]],
  baltimore@polygons[[27]],baltimore@polygons[[27]],baltimore@polygons[[27]],baltimore@polygons[[27]],baltimore@polygons[[27]],baltimore@polygons[[27]],baltimore@polygons[[27]],
  baltimore@polygons[[28]],baltimore@polygons[[28]],baltimore@polygons[[28]],baltimore@polygons[[28]],baltimore@polygons[[28]],baltimore@polygons[[28]],baltimore@polygons[[28]],
  baltimore@polygons[[29]],baltimore@polygons[[29]],baltimore@polygons[[29]],baltimore@polygons[[29]],baltimore@polygons[[29]],baltimore@polygons[[29]],baltimore@polygons[[29]],
  baltimore@polygons[[30]],baltimore@polygons[[30]],baltimore@polygons[[30]],baltimore@polygons[[30]],baltimore@polygons[[30]],baltimore@polygons[[30]],baltimore@polygons[[30]],
  baltimore@polygons[[31]],baltimore@polygons[[31]],baltimore@polygons[[31]],baltimore@polygons[[31]],baltimore@polygons[[31]],baltimore@polygons[[31]],baltimore@polygons[[31]],
  baltimore@polygons[[32]],baltimore@polygons[[32]],baltimore@polygons[[32]],baltimore@polygons[[32]],baltimore@polygons[[32]],baltimore@polygons[[32]],baltimore@polygons[[32]],
  baltimore@polygons[[33]],baltimore@polygons[[33]],baltimore@polygons[[33]],baltimore@polygons[[33]],baltimore@polygons[[33]],baltimore@polygons[[33]],baltimore@polygons[[33]],
  baltimore@polygons[[34]],baltimore@polygons[[34]],baltimore@polygons[[34]],baltimore@polygons[[34]],baltimore@polygons[[34]],baltimore@polygons[[34]],baltimore@polygons[[34]],
  baltimore@polygons[[35]],baltimore@polygons[[35]],baltimore@polygons[[35]],baltimore@polygons[[35]],baltimore@polygons[[35]],baltimore@polygons[[35]],baltimore@polygons[[35]],
  baltimore@polygons[[36]],baltimore@polygons[[36]],baltimore@polygons[[36]],baltimore@polygons[[36]],baltimore@polygons[[36]],baltimore@polygons[[36]],baltimore@polygons[[36]],
  baltimore@polygons[[37]],baltimore@polygons[[37]],baltimore@polygons[[37]],baltimore@polygons[[37]],baltimore@polygons[[37]],baltimore@polygons[[37]],baltimore@polygons[[37]],
  baltimore@polygons[[38]],baltimore@polygons[[38]],baltimore@polygons[[38]],baltimore@polygons[[38]],baltimore@polygons[[38]],baltimore@polygons[[38]],baltimore@polygons[[38]],
  baltimore@polygons[[39]],baltimore@polygons[[39]],baltimore@polygons[[39]],baltimore@polygons[[39]],baltimore@polygons[[39]],baltimore@polygons[[39]],baltimore@polygons[[39]],
  baltimore@polygons[[40]],baltimore@polygons[[40]],baltimore@polygons[[40]],baltimore@polygons[[40]],baltimore@polygons[[40]],baltimore@polygons[[40]],baltimore@polygons[[40]],
  baltimore@polygons[[41]],baltimore@polygons[[41]],baltimore@polygons[[41]],baltimore@polygons[[41]],baltimore@polygons[[41]],baltimore@polygons[[41]],baltimore@polygons[[41]],
  baltimore@polygons[[42]],baltimore@polygons[[42]],baltimore@polygons[[42]],baltimore@polygons[[42]],baltimore@polygons[[42]],baltimore@polygons[[42]],baltimore@polygons[[42]],
  baltimore@polygons[[43]],baltimore@polygons[[43]],baltimore@polygons[[43]],baltimore@polygons[[43]],baltimore@polygons[[43]],baltimore@polygons[[43]],baltimore@polygons[[43]],
  baltimore@polygons[[44]],baltimore@polygons[[44]],baltimore@polygons[[44]],baltimore@polygons[[44]],baltimore@polygons[[44]],baltimore@polygons[[44]],baltimore@polygons[[44]],
  baltimore@polygons[[45]],baltimore@polygons[[45]],baltimore@polygons[[45]],baltimore@polygons[[45]],baltimore@polygons[[45]],baltimore@polygons[[45]],baltimore@polygons[[45]],
  baltimore@polygons[[46]],baltimore@polygons[[46]],baltimore@polygons[[46]],baltimore@polygons[[46]],baltimore@polygons[[46]],baltimore@polygons[[46]],baltimore@polygons[[46]],
  baltimore@polygons[[47]],baltimore@polygons[[47]],baltimore@polygons[[47]],baltimore@polygons[[47]],baltimore@polygons[[47]],baltimore@polygons[[47]],baltimore@polygons[[47]],
  baltimore@polygons[[48]],baltimore@polygons[[48]],baltimore@polygons[[48]],baltimore@polygons[[48]],baltimore@polygons[[48]],baltimore@polygons[[48]],baltimore@polygons[[48]],
  baltimore@polygons[[49]],baltimore@polygons[[49]],baltimore@polygons[[49]],baltimore@polygons[[49]],baltimore@polygons[[49]],baltimore@polygons[[49]],baltimore@polygons[[49]],
  baltimore@polygons[[50]],baltimore@polygons[[50]],baltimore@polygons[[50]],baltimore@polygons[[50]],baltimore@polygons[[50]],baltimore@polygons[[50]],baltimore@polygons[[50]],
  baltimore@polygons[[51]],baltimore@polygons[[51]],baltimore@polygons[[51]],baltimore@polygons[[51]],baltimore@polygons[[51]],baltimore@polygons[[51]],baltimore@polygons[[51]],
  baltimore@polygons[[52]],baltimore@polygons[[52]],baltimore@polygons[[52]],baltimore@polygons[[52]],baltimore@polygons[[52]],baltimore@polygons[[52]],baltimore@polygons[[52]],
  baltimore@polygons[[53]],baltimore@polygons[[53]],baltimore@polygons[[53]],baltimore@polygons[[53]],baltimore@polygons[[53]],baltimore@polygons[[53]],baltimore@polygons[[53]],
  baltimore@polygons[[54]],baltimore@polygons[[54]],baltimore@polygons[[54]],baltimore@polygons[[54]],baltimore@polygons[[54]],baltimore@polygons[[54]],baltimore@polygons[[54]],
  baltimore@polygons[[55]],baltimore@polygons[[55]],baltimore@polygons[[55]],baltimore@polygons[[55]],baltimore@polygons[[55]],baltimore@polygons[[55]],baltimore@polygons[[55]],
  baltimore@polygons[[56]],baltimore@polygons[[56]],baltimore@polygons[[56]],baltimore@polygons[[56]],baltimore@polygons[[56]],baltimore@polygons[[56]],baltimore@polygons[[56]]))

baltimore_alias <- baltimore

baltimore_alias@polygons <- anim_tibble$geometry

baltimore_alias@data$community %in% anim_tibble$Community #Again, we ensure that we have a perfect match

baltimore_alias@data <-left_join(baltimore_alias@data,anim_tibble,by = c('community' = 'Community'))

bbox <- baltimore@bbox
pb <-  c(0,25,50,75,100,125,150,175,200,225,250)

animated_crime_map <- tm_shape(baltimore_alias,bbox = bbox, projection = crs.geo1) +
  tm_polygons("CrimeRate",breaks=pb) +
  tm_facets(free.scales.fill = F,along = "Year")+tm_shape(baltimore)+tm_borders()

tmap_animation(animated_crime_map, delay=100)

3.5 Internet access and crimes

First, we need to merge the data into one big file. Every data set needs one column named the same way (e.g. Community). We create a table with the following columns: Community Statistical Area, internet accessibility per CSA, CCTV per area, CrimeStatsPerArea, crime_data_with_areas, FelonyStats, and MisdemeanorStats. Merging these files into one table gives us an overview and enables us to run regressions that generate meaningful output. We simply merge the files on their shared column, the community statistical area.
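The merge described above can be folded over any number of tables sharing the key column. A minimal sketch with toy stand-ins for the real data frames (names and values here are hypothetical):

```r
library(dplyr)

# Toy stand-ins for the per-community tables described above.
internet_toy <- data.frame(Community = c("A", "B"), InternetAccess = c(0.8, 0.6))
cctv_toy     <- data.frame(Community = c("A", "B"), CCTVPerArea = c(12, 3))
crime_toy    <- data.frame(Community = c("A", "B"), CrimePer1000inhabitants = c(150, 90))

# Reduce() folds left_join over the list, producing one overview table
# keyed on the shared Community column.
overview <- Reduce(function(x, y) left_join(x, y, by = "Community"),
                   list(internet_toy, cctv_toy, crime_toy))
```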

Analysis

4.1 Crime VS CCTVs - Does the presence of CCTV deter crime?

The idea here is to see whether there is any relationship between crime per capita and CCTV density. We first fit a simple linear regression model on a new data frame called CCTV_VS_crimes (essentially a left join). The regression indicates a moderate positive association between higher crime per capita and higher CCTV density, with an \(R^2\) of about 43%. Plotting the observations makes this tendency visible; the blue line represents the regression line.
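A minimal sketch of the regression behind the summary shown next, run here on simulated data (the real CCTV_VS_crimes data frame is built by a left join as described above; the simulated coefficients merely echo the reported estimates):

```r
# Simulate a data frame with the same two columns the real model uses.
set.seed(42)
toy <- data.frame(CrimePer1000inhabitants = runif(56, 50, 2000))
toy$density_perc <- -1.16 + 0.0047 * toy$CrimePer1000inhabitants + rnorm(56, sd = 1.8)

# Same model formula as in the summary output: density regressed on crime rate.
model <- lm(density_perc ~ CrimePer1000inhabitants, data = toy)
coef(model)  # the fitted slope lands near the 0.0047 used to simulate
```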

#> 
#> Call:
#> lm(formula = CCTV_VS_crimes$density_perc ~ CCTV_VS_crimes$CrimePer1000inhabitants)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -3.995 -1.043 -0.394  0.789  4.756 
#> 
#> Coefficients:
#>                                         Estimate Std. Error t value
#> (Intercept)                            -1.162195   0.520924   -2.23
#> CCTV_VS_crimes$CrimePer1000inhabitants  0.004710   0.000739    6.37
#>                                        Pr(>|t|)    
#> (Intercept)                                0.03 *  
#> CCTV_VS_crimes$CrimePer1000inhabitants  4.3e-08 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.79 on 54 degrees of freedom
#> Multiple R-squared:  0.429,  Adjusted R-squared:  0.419 
#> F-statistic: 40.6 on 1 and 54 DF,  p-value: 4.26e-08

4.1.1 Mapping of CCTVs and crime, felony and misdemeanor per capita

In this section we map the CCTVs and crimes. The method is the same as before, with the tmap package. However, this time we have two different shapes: tm_shape(baltimore), which constitutes the base map, and tm_shape(balt_dat), which adds a layer containing points. This map supports the intuition behind the phenomenon we illustrated before: where crime per capita is lowest, there seem to be fewer CCTVs (for instance in the northern or western areas of the city). There seems to be a correlation between the dark red areas and the number of CCTVs per area.

Crime_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per 1,000 inhabitants per area",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)+ tm_shape(balt_dat) + tm_dots(col="black")

Felony_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "FelonyPerCapitaPerArea", title ="Felonies per 1,000 inhabitants per area", style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")

Misdemeanor_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorPerCapitaPerArea", title ="Misdemeanors per 1,000 inhabitants per area",style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")

tmap_mode("view") #Use this command to have interactive maps

baltimore@data[["fid"]]<-baltimore@data[["community"]] #We do that so that we see the name of the Community when using an interactive map

tmap_arrange(Crime_and_CCTV_map,Felony_and_CCTV_map,Misdemeanor_and_CCTV_map)

This map shows quite well that CCTV placement seems to follow the areas where crime per capita is the highest. Looking at the north-western and south-western areas of the map, it can be seen that the placement of CCTVs aligns rather well with the areas considered dangerous.

Crime_per_capita_VS_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black") 

tmap_mode("plot")
Crime_per_capita_VS_CCTV_map

We are still not sure whether we should use the automatic breaking feature of tmap or whether we should set personalized map breaks. The following chunk illustrates how we could create personalized break arguments.

sort(baltimore@data[["CrimePer1000inhabitants"]])

breaks1 <- c(0,250,500,750,1000,1250,1500) #Not sure which breaks to use; for the moment we use the automatic break system with the "quantile" parameter

tmap_mode("plot") #We go back to classic plotting

4.1.2 Analysis of where crime took place: August 2021

We are still trying to see whether the presence of CCTVs can deter crime. One interesting approach is to spatially locate crimes and compare them to CCTV locations. We know that CCTVs capture activity within 256 ft (~2 blocks). We only select crimes committed in August 2021 to keep the data interpretable (a larger time frame would make the map unreadable). We choose August 2021 because it is the latest full month in our data set; taking the latest time point assures us that most of the CCTVs in the data set were already in place (since we have no information on when exactly they were added). As before, we create a data table, assign coordinates, and define the CRS (here "EPSG:4326", which we need to transform using spTransform). We again create a map with tm_shape to visualise the results. The output shows where crime takes place relative to CCTV locations. Zooming in on the map, we see that some crimes are committed directly in front of CCTVs. Although this is not conclusive evidence, the observation goes against the idea that CCTVs are effective crime deterrents.

crime_spatial <- as.data.table(crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31")))
coordinates(crime_spatial) <-  c("Longitude","Latitude")
proj4string(crime_spatial) <-  CRS("+init=epsg:4326")
crime_spatial <- spTransform(crime_spatial,crs.geo1)

August21Crimes_VS_CCTV <- tm_shape(baltimore) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.1, title="Crimes committed in August 2021 VS CCTV location",frame.lwd = 5)+ tm_shape(balt_dat) + tm_dots(col="black")+tm_shape(crime_spatial)+tm_dots(col="red",alpha=0.5)

#It could be interesting to see where crime took place relative to CCTV locations in the area with the highest crime rate in August 2021

tmap_mode("view") #Use this command to have interactive maps
August21Crimes_VS_CCTV

To further investigate this point, we decided to focus on one specific area: the one with the highest crime incidence in August. We therefore calculated the crime rate per area for August to see where crime was highest. The results show that Downtown has the highest rate, so we take a closer look at the Downtown area.

CrimePerCapitaPerAreaAugust2021 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31")) %>% 
  group_by(Community) %>%
  summarize(CrimeFrequency=n())

CrimePerCapitaPerAreaAugust2021 <-mutate(CrimePerCapitaPerAreaAugust2021,CrimePer1000inhabitants=((CrimePerCapitaPerAreaAugust2021$CrimeFrequency/population_data$tpop20)*1000))
#We see that Downtown is the area with the highest crime rate in August 2021; we might want to focus on that area and see whether crime takes place directly next to CCTVs

In order to create a “sub-map”, we define a smaller area using the st_bbox function. The values passed to the function are the extreme x-axis and y-axis values of a map using the EPSG:3857 WGS 84 / Pseudo-Mercator coordinate system. Then, using tm_shape with the new spatial object Downtown_area as argument, we create a map in the same way as before. As we want to be able to locate this smaller area within Baltimore, we also create a Baltimore map with a rectangle marking the newly created sub-area. To combine the two maps, we run the last two lines together and use the viewport function. The output is a zoom on the desired area, combined with the larger map showing a rectangle over the area under analysis. In Downtown, we see quite clearly that some crimes are committed right next to CCTVs.

Downtown_area <-  st_bbox(c(xmin = -8531335.08, xmax = -8526873.06,
                      ymin =4765236.47, ymax = 4762527.65),
                    crs = st_crs(baltimore)) %>% st_as_sfc()
 
Downtown_map <- tm_shape(Downtown_area) + tm_borders(col="white")+ tm_shape(baltimore) + tm_borders(col="black") + tm_layout(inner.margins = 0.05,frame.lwd = 5,title = "Zoom on Downtown Area",title.position = c('left', 'top'))+tm_scale_bar(position = c("left", "top"))+ tm_shape(balt_dat) + tm_symbols(shape = 2, col = "black", size = 0.07)+tm_shape(crime_spatial)+tm_dots(col="red")

Baltimore_map_2 <- tm_shape(baltimore) + tm_borders()+ tm_shape(Downtown_area) + tm_borders(lwd = 1.5,col = "red") + tm_layout(frame.lwd = 6,inner.margins = 0.05)

tmap_mode("plot")
Downtown_map
print(Baltimore_map_2, vp = viewport(0.8, 0.27, width = 0.5, height = 0.5)) #By running these two lines together, we obtain the map with an additional overview

4.1.3 Anomaly #1: Prison

We did not have enough time to examine all the elements that seem to be outliers, but we did look at Baltimore’s prison, which seems particularly interesting. The methodology used to create this map is the same as described above. The prison and its surrounding area are worth analysing: there are many CCTVs around it (which intuitively makes sense) and high crime per capita nearby (as in most central areas), but essentially no crime recorded inside it, since we have no data for it.

tmap_mode("plot")

Prison_area <-  st_bbox(c(xmin = -8529169.92, xmax = -8526465.97,
                      ymin =4764196.55, ymax = 4765056.50),
                    crs = st_crs(baltimore)) %>% st_as_sfc()
 
Prison_map <- tm_shape(Prison_area) + tm_borders(col="black",alpha=0.3)+ tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per Capita per Area",style = "quantile") + tm_borders(col="black") + tm_layout(inner.margins = 0.05,frame.lwd = 5,title = "Zoom on Baltimore Prison",title.position = c('left', 'top'))+tm_scale_bar(position = c("left", "top"))+ tm_shape(balt_dat) + tm_dots(col="black") #This map zooms in on the prison. This "area" is special: we have no crime data for it, and we can see that there is a huge concentration of CCTVs directly next to the prison.


Baltimore_map <- tm_shape(baltimore) + tm_borders()+ tm_shape(Prison_area) + tm_borders(lwd = 3,col = "red") + tm_layout(frame.lwd = 6,inner.margins = 0.05)


Prison_map
print(Baltimore_map, vp = viewport(0.8, 0.27, width = 0.5, height = 0.5)) #By running these two lines together, we obtain the map with an additional overview

4.2 What types of crimes may be deterred by surveillance cameras?

The logic here is the same as in section 4.1, except that here we want to see whether the presence of CCTVs can deter a certain type of crime. We start with felonies and misdemeanors, then we analyse violent and property crimes.

4.2.1 CCTVs VS Felonies and Misdemeanors

The results of the simple linear regressions show a weak \(R^2\) for both felonies and misdemeanors. It therefore does not seem like the presence of CCTVs has a particularly strong impact on either type of crime.
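The summaries printed below come from regressions of the following form (the formulas appear in the `Call:` lines of the output). This is only a sketch: it assumes the CCTV_VS_Felony and CCTV_VS_misdemeanors data frames have already been built by joining CCTV density and per-capita crime counts per community, as in section 4.1.

```r
#Sketch of the two simple linear regressions whose summaries are printed below
#(assumes CCTV_VS_Felony and CCTV_VS_misdemeanors were built as in section 4.1)
felony_model <- lm(CCTV_VS_Felony$density_perc ~ CCTV_VS_Felony$FelonyPerCapitaPerArea)
misdemeanor_model <- lm(CCTV_VS_misdemeanors$density_perc ~ CCTV_VS_misdemeanors$MisdemeanorPerCapitaPerArea)

summary(felony_model)
summary(misdemeanor_model)
```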

#> 
#> Call:
#> lm(formula = CCTV_VS_Felony$density_perc ~ CCTV_VS_Felony$FelonyPerCapitaPerArea)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -2.860 -1.225 -0.384  0.830  6.133 
#> 
#> Coefficients:
#>                                       Estimate Std. Error t value
#> (Intercept)                           -0.98925    0.54326   -1.82
#> CCTV_VS_Felony$FelonyPerCapitaPerArea  0.00961    0.00167    5.75
#>                                       Pr(>|t|)    
#> (Intercept)                              0.074 .  
#> CCTV_VS_Felony$FelonyPerCapitaPerArea  4.2e-07 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.87 on 54 degrees of freedom
#> Multiple R-squared:  0.38,   Adjusted R-squared:  0.369 
#> F-statistic: 33.1 on 1 and 54 DF,  p-value: 4.23e-07
#> 
#> Call:
#> lm(formula = CCTV_VS_misdemeanors$density_perc ~ CCTV_VS_misdemeanors$MisdemeanorPerCapitaPerArea)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -4.443 -1.011 -0.459  0.635  5.297 
#> 
#> Coefficients:
#>                                                  Estimate Std. Error
#> (Intercept)                                      -0.76212    0.49936
#> CCTV_VS_misdemeanors$MisdemeanorPerCapitaPerArea  0.00756    0.00129
#>                                                  t value Pr(>|t|)
#> (Intercept)                                        -1.53     0.13
#> CCTV_VS_misdemeanors$MisdemeanorPerCapitaPerArea    5.88  2.7e-07
#>                                                     
#> (Intercept)                                         
#> CCTV_VS_misdemeanors$MisdemeanorPerCapitaPerArea ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.85 on 54 degrees of freedom
#> Multiple R-squared:  0.39,   Adjusted R-squared:  0.379 
#> F-statistic: 34.5 on 1 and 54 DF,  p-value: 2.68e-07

4.2.2 CCTVs VS Violent and Property Crime

As in the previous sub-section, the regressions below follow the same pattern. The fit is somewhat stronger for violent crime (\(R^2\) of about 0.52) than for property crime (\(R^2\) of about 0.28), but neither suggests a particularly strong relationship between CCTV density and a given type of crime.

#> 
#> Call:
#> lm(formula = CCTV_VS_ViolentCrime$density_perc ~ CCTV_VS_ViolentCrime$ViolentCrimePerCapitaPerArea)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -2.573 -1.030 -0.303  0.954  4.039 
#> 
#> Coefficients:
#>                                                   Estimate
#> (Intercept)                                       -1.11048
#> CCTV_VS_ViolentCrime$ViolentCrimePerCapitaPerArea  0.00961
#>                                                   Std. Error t value
#> (Intercept)                                          0.44086   -2.52
#> CCTV_VS_ViolentCrime$ViolentCrimePerCapitaPerArea    0.00127    7.59
#>                                                   Pr(>|t|)    
#> (Intercept)                                          0.015 *  
#> CCTV_VS_ViolentCrime$ViolentCrimePerCapitaPerArea  4.5e-10 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.65 on 54 degrees of freedom
#> Multiple R-squared:  0.516,  Adjusted R-squared:  0.507 
#> F-statistic: 57.6 on 1 and 54 DF,  p-value: 4.54e-10
#> 
#> Call:
#> lm(formula = CCTV_VS_PropertyCrime$density_perc ~ CCTV_VS_PropertyCrime$PropertyCrimePerCapitaPerArea)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -4.903 -1.103 -0.536  0.611  5.413 
#> 
#> Coefficients:
#>                                                     Estimate
#> (Intercept)                                         -0.67099
#> CCTV_VS_PropertyCrime$PropertyCrimePerCapitaPerArea  0.00757
#>                                                     Std. Error
#> (Intercept)                                            0.59553
#> CCTV_VS_PropertyCrime$PropertyCrimePerCapitaPerArea    0.00164
#>                                                     t value Pr(>|t|)
#> (Intercept)                                           -1.13     0.26
#> CCTV_VS_PropertyCrime$PropertyCrimePerCapitaPerArea    4.62  2.4e-05
#>                                                        
#> (Intercept)                                            
#> CCTV_VS_PropertyCrime$PropertyCrimePerCapitaPerArea ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 2.01 on 54 degrees of freedom
#> Multiple R-squared:  0.284,  Adjusted R-squared:  0.27 
#> F-statistic: 21.4 on 1 and 54 DF,  p-value: 2.4e-05

4.3 Comparison of CCTV density and wealth

We investigated whether there is a correlation between CCTV density and wealth. One of our initial hypotheses was that the government is more respectful of wealthier people’s privacy. So, similarly, we perform a regression. The results here are not conclusive, since both the \(R^2\) and the adjusted \(R^2\) are poor. The next sub-section illustrates this in a map.

#> 
#> Call:
#> lm(formula = CCTV_VS_poverty$density_perc ~ CCTV_VS_poverty$hhpov19)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -2.629 -1.046 -0.582  0.468  9.600 
#> 
#> Coefficients:
#>                         Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)               0.0523     0.4850    0.11     0.91    
#> CCTV_VS_poverty$hhpov19   0.1056     0.0244    4.33  6.5e-05 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 2.05 on 54 degrees of freedom
#> Multiple R-squared:  0.258,  Adjusted R-squared:  0.244 
#> F-statistic: 18.7 on 1 and 54 DF,  p-value: 6.54e-05

4.3.1 Mapping of CCTVs and wealth

The methodology used to create the map is always the same: we ensure a perfect match between the datasets, then merge the data using left_join and finally create the map using tmap. While the simple linear regression was not conclusive, the map does make some interesting patterns visible. Looking at the map, we see that the areas with no CCTVs are more likely to be quite wealthy. However, we are not sure that wealth is the only influential factor here; we suspect CCTV density is rather correlated with crime per capita in these areas. Again, in the northern parts we see fewer CCTVs, less crime, and also a wealthier population.

#>  [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [14] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [27] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [40] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [53] TRUE TRUE TRUE TRUE
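The vector of TRUE values above is the result of the match check. The subsequent join and map could look like the following sketch; the object and column names (e.g. hhpov19 for the household poverty rate) are taken from elsewhere in this document, while the join key and the exact tm_* styling options are assumptions.

```r
#Sketch of the wealth map described above (join key and styling options are
#illustrative; hhpov19 and the object names come from the rest of the report)
baltimore_wealth <- left_join(baltimore, CCTV_VS_poverty, by = "Community")

tm_shape(baltimore_wealth) +
  tm_fill(col = "hhpov19", title = "Household poverty rate (2019)", style = "quantile") +
  tm_borders(col = "black") +
  tm_shape(balt_dat) + tm_symbols(shape = 2, col = "black", size = 0.07) + #CCTV locations
  tm_layout(title = "CCTVs and Wealth", title.position = c("left", "top"))
```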

4.4 Comparison of crimes and wealth

We want to investigate whether wealthier areas are more or less impacted by crime. To do so, we once more compute a simple linear regression. We again see that the \(R^2\) is quite poor.

#> 
#> Call:
#> lm(formula = Crime_VS_Poverty$CrimePer1000inhabitants ~ Crime_VS_Poverty$hhpov19)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -535.1 -148.0  -79.1  158.8 1050.6 
#> 
#> Coefficients:
#>                          Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)                392.03      68.16    5.75  4.3e-07 ***
#> Crime_VS_Poverty$hhpov19    14.25       3.43    4.15  0.00012 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 288 on 54 degrees of freedom
#> Multiple R-squared:  0.242,  Adjusted R-squared:  0.228 
#> F-statistic: 17.3 on 1 and 54 DF,  p-value: 0.000117

4.5 Felonies VS Misdemeanors - Do we have an equal crime type distribution?

It is always interesting to see whether we can spot patterns in crime data. The idea here is to analyse whether we tend to observe an equal distribution of felonies and misdemeanors in each area. Computing a simple linear regression, we see that the two types of crime seem rather equally distributed across areas. Still, it is interesting to observe that the biggest outlier on the scatter plot is Downtown/Seton Hill: in Downtown, misdemeanors per capita are much larger than felonies per capita. We do not know whether this finding is relevant; yet it must be mentioned that this area is also one of the richest areas in Baltimore. This might suggest that richer areas are more impacted by less severe crimes.
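The scatter plot mentioned above could be reproduced along these lines. The Felony_VS_Misdemeanor data frame and its column names are taken from the regression call below; the use of a Community label column to spot outliers is an assumption.

```r
#Sketch of the felony-vs-misdemeanor scatter plot (column names taken from the
#regression below; labelling points by Community is illustrative)
library(ggplot2)

ggplot(Felony_VS_Misdemeanor,
       aes(x = MisdemeanorPerCapitaPerArea, y = FelonyPerCapitaPerArea)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  geom_text(aes(label = Community), size = 2, vjust = -1) + #Makes outliers such as Downtown/Seton Hill visible
  labs(x = "Misdemeanors per 1000 inhabitants",
       y = "Felonies per 1000 inhabitants")
```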

#> 
#> Call:
#> lm(formula = Felony_VS_Misdemeanor$FelonyPerCapitaPerArea ~ Felony_VS_Misdemeanor$MisdemeanorPerCapitaPerArea)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -204.5  -59.2  -15.5   51.5  216.2 
#> 
#> Coefficients:
#>                                                   Estimate
#> (Intercept)                                        81.2351
#> Felony_VS_Misdemeanor$MisdemeanorPerCapitaPerArea   0.6162
#>                                                   Std. Error t value
#> (Intercept)                                          24.9863    3.25
#> Felony_VS_Misdemeanor$MisdemeanorPerCapitaPerArea     0.0644    9.57
#>                                                   Pr(>|t|)    
#> (Intercept)                                          0.002 ** 
#> Felony_VS_Misdemeanor$MisdemeanorPerCapitaPerArea  3.1e-13 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 92.8 on 54 degrees of freedom
#> Multiple R-squared:  0.629,  Adjusted R-squared:  0.622 
#> F-statistic: 91.7 on 1 and 54 DF,  p-value: 3.13e-13

4.6 Attempt to create a more accurate model: Multiple Regression
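The traces printed below are consistent with a forward stepwise selection based on AIC, followed by a check of variance inflation factors. The following is only a sketch: it assumes a Community_data data frame gathering all the per-area variables used so far, and that the VIFs were computed with car::vif.

```r
#Sketch of the forward stepwise selection whose trace is printed below
#(assumes Community_data holds all the per-area variables used so far)
library(car) #for vif()

null_model <- lm(density_perc ~ 1, data = Community_data)
full_scope <- ~ ViolentCrimePerCapitaPerArea + hhpov19 + hsagov14 +
  nohhint19 + unempr19 + ready13

best_model <- step(null_model, scope = full_scope, direction = "forward")
summary(best_model)
vif(best_model) #Values well above 10 indicate problematic multicollinearity
```

Note that in the second search below, the VIFs for the violent crime and felony variables are 16.00 and 17.96, suggesting that these two predictors are strongly collinear.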

#> Start:  AIC=96.9
#> Community_data$density_perc ~ 1
#> 
#>                                               Df Sum of Sq RSS  AIC
#> + Community_data$ViolentCrimePerCapitaPerArea  1     157.3 147 58.2
#> + Community_data$hhpov19                       1      78.5 226 82.2
#> + Community_data$hsagov14                      1      61.6 243 86.2
#> + Community_data$nohhint19                     1      19.4 285 95.2
#> + Community_data$unempr19                      1      16.0 289 95.8
#> <none>                                                     305 96.9
#> + Community_data$ready13                       1       0.9 304 98.7
#> 
#> Step:  AIC=58.2
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea
#> 
#>                            Df Sum of Sq RSS  AIC
#> + Community_data$hsagov14   1      8.25 139 57.0
#> <none>                                  147 58.2
#> + Community_data$unempr19   1      4.46 143 58.5
#> + Community_data$ready13    1      3.03 144 59.0
#> + Community_data$nohhint19  1      2.66 145 59.2
#> + Community_data$hhpov19    1      2.24 145 59.3
#> 
#> Step:  AIC=57
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$hsagov14
#> 
#>                            Df Sum of Sq RSS  AIC
#> + Community_data$unempr19   1      6.58 133 56.3
#> + Community_data$nohhint19  1      5.09 134 56.9
#> <none>                                  139 57.0
#> + Community_data$hhpov19    1      0.43 139 58.8
#> + Community_data$ready13    1      0.03 139 59.0
#> 
#> Step:  AIC=56.3
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$hsagov14 + Community_data$unempr19
#> 
#>                            Df Sum of Sq RSS  AIC
#> + Community_data$hhpov19    1      6.58 126 55.4
#> <none>                                  133 56.3
#> + Community_data$nohhint19  1      0.40 132 58.1
#> + Community_data$ready13    1      0.15 132 58.2
#> 
#> Step:  AIC=55.4
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$hsagov14 + Community_data$unempr19 + Community_data$hhpov19
#> 
#>                            Df Sum of Sq RSS  AIC
#> <none>                                  126 55.4
#> + Community_data$nohhint19  1     1.969 124 56.5
#> + Community_data$ready13    1     0.005 126 57.4
#> 
#> Call:
#> lm(formula = Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$hsagov14 + Community_data$unempr19 + Community_data$hhpov19)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -3.010 -1.083 -0.269  0.946  3.436 
#> 
#> Coefficients:
#>                                             Estimate Std. Error
#> (Intercept)                                  0.83786    1.15304
#> Community_data$ViolentCrimePerCapitaPerArea  0.00856    0.00155
#> Community_data$hsagov14                     -0.02161    0.01413
#> Community_data$unempr19                     -0.12576    0.05540
#> Community_data$hhpov19                       0.04932    0.03023
#>                                             t value Pr(>|t|)    
#> (Intercept)                                    0.73    0.471    
#> Community_data$ViolentCrimePerCapitaPerArea    5.50  1.2e-06 ***
#> Community_data$hsagov14                       -1.53    0.132    
#> Community_data$unempr19                       -2.27    0.027 *  
#> Community_data$hhpov19                         1.63    0.109    
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.57 on 51 degrees of freedom
#> Multiple R-squared:  0.586,  Adjusted R-squared:  0.554 
#> F-statistic: 18.1 on 4 and 51 DF,  p-value: 2.66e-09
#> Community_data$ViolentCrimePerCapitaPerArea 
#>                                        1.67 
#>                     Community_data$hsagov14 
#>                                        1.34 
#>                     Community_data$unempr19 
#>                                        1.93 
#>                      Community_data$hhpov19 
#>                                        2.60
#> Start:  AIC=96.9
#> Community_data$density_perc ~ 1
#> 
#>                                                Df Sum of Sq RSS  AIC
#> + Community_data$ViolentCrimePerCapitaPerArea   1     157.3 147 58.2
#> + Community_data$CrimePer1000inhabitants        1     130.8 174 67.4
#> + Community_data$MisdemeanorPerCapitaPerArea    1     118.9 186 71.2
#> + Community_data$FelonyPerCapitaPerArea         1     115.8 189 72.1
#> + Community_data$PropertyCrimePerCapitaPerArea  1      86.4 218 80.2
#> + Community_data$hhpov19                        1      78.5 226 82.2
#> + Community_data$hsagov14                       1      61.6 243 86.2
#> + Community_data$nohhint19                      1      19.4 285 95.2
#> + Community_data$unempr19                       1      16.0 289 95.8
#> <none>                                                      305 96.9
#> + Community_data$ready13                        1       0.9 304 98.7
#> 
#> Step:  AIC=58.2
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea
#> 
#>                                                Df Sum of Sq RSS  AIC
#> + Community_data$FelonyPerCapitaPerArea         1     28.75 119 48.1
#> + Community_data$hsagov14                       1      8.25 139 57.0
#> <none>                                                      147 58.2
#> + Community_data$CrimePer1000inhabitants        1      5.04 142 58.3
#> + Community_data$PropertyCrimePerCapitaPerArea  1      5.04 142 58.3
#> + Community_data$unempr19                       1      4.46 143 58.5
#> + Community_data$ready13                        1      3.03 144 59.0
#> + Community_data$nohhint19                      1      2.66 145 59.2
#> + Community_data$hhpov19                        1      2.24 145 59.3
#> + Community_data$MisdemeanorPerCapitaPerArea    1      0.00 147 60.2
#> 
#> Step:  AIC=48
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$FelonyPerCapitaPerArea
#> 
#>                                                Df Sum of Sq RSS  AIC
#> + Community_data$hhpov19                        1     12.85 106 43.6
#> <none>                                                      119 48.1
#> + Community_data$CrimePer1000inhabitants        1      4.15 114 48.1
#> + Community_data$PropertyCrimePerCapitaPerArea  1      4.15 114 48.1
#> + Community_data$MisdemeanorPerCapitaPerArea    1      4.15 114 48.1
#> + Community_data$hsagov14                       1      4.01 115 48.1
#> + Community_data$ready13                        1      1.24 117 49.5
#> + Community_data$unempr19                       1      0.06 119 50.0
#> + Community_data$nohhint19                      1      0.01 119 50.0
#> 
#> Step:  AIC=43.6
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$FelonyPerCapitaPerArea + Community_data$hhpov19
#> 
#>                                                Df Sum of Sq RSS  AIC
#> + Community_data$nohhint19                      1      4.59 101 43.2
#> <none>                                                      106 43.6
#> + Community_data$unempr19                       1      3.35 102 43.8
#> + Community_data$hsagov14                       1      0.42 105 45.4
#> + Community_data$ready13                        1      0.38 105 45.4
#> + Community_data$CrimePer1000inhabitants        1      0.11 106 45.6
#> + Community_data$PropertyCrimePerCapitaPerArea  1      0.11 106 45.6
#> + Community_data$MisdemeanorPerCapitaPerArea    1      0.11 106 45.6
#> 
#> Step:  AIC=43.2
#> Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$FelonyPerCapitaPerArea + Community_data$hhpov19 + 
#>     Community_data$nohhint19
#> 
#>                                                Df Sum of Sq   RSS
#> <none>                                                      101.2
#> + Community_data$CrimePer1000inhabitants        1     1.681  99.6
#> + Community_data$PropertyCrimePerCapitaPerArea  1     1.681  99.6
#> + Community_data$MisdemeanorPerCapitaPerArea    1     1.681  99.6
#> + Community_data$hsagov14                       1     0.686 100.5
#> + Community_data$unempr19                       1     0.614 100.6
#> + Community_data$ready13                        1     0.099 101.1
#>                                                 AIC
#> <none>                                         43.2
#> + Community_data$CrimePer1000inhabitants       44.2
#> + Community_data$PropertyCrimePerCapitaPerArea 44.2
#> + Community_data$MisdemeanorPerCapitaPerArea   44.2
#> + Community_data$hsagov14                      44.8
#> + Community_data$unempr19                      44.8
#> + Community_data$ready13                       45.1
#> 
#> Call:
#> lm(formula = Community_data$density_perc ~ Community_data$ViolentCrimePerCapitaPerArea + 
#>     Community_data$FelonyPerCapitaPerArea + Community_data$hhpov19 + 
#>     Community_data$nohhint19)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -3.807 -0.928 -0.093  0.659  3.637 
#> 
#> Coefficients:
#>                                             Estimate Std. Error
#> (Intercept)                                 -0.09384    0.48849
#> Community_data$ViolentCrimePerCapitaPerArea  0.02596    0.00432
#> Community_data$FelonyPerCapitaPerArea       -0.02217    0.00533
#> Community_data$hhpov19                       0.07946    0.02682
#> Community_data$nohhint19                    -0.04050    0.02665
#>                                             t value Pr(>|t|)    
#> (Intercept)                                   -0.19  0.84842    
#> Community_data$ViolentCrimePerCapitaPerArea    6.01    2e-07 ***
#> Community_data$FelonyPerCapitaPerArea         -4.16  0.00012 ***
#> Community_data$hhpov19                         2.96  0.00463 ** 
#> Community_data$nohhint19                      -1.52  0.13471    
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.41 on 51 degrees of freedom
#> Multiple R-squared:  0.668,  Adjusted R-squared:  0.642 
#> F-statistic: 25.6 on 4 and 51 DF,  p-value: 1.13e-11
#> Community_data$ViolentCrimePerCapitaPerArea 
#>                                       16.00 
#>       Community_data$FelonyPerCapitaPerArea 
#>                                       17.96 
#>                      Community_data$hhpov19 
#>                                        2.55 
#>                    Community_data$nohhint19 
#>                                        2.00

4.7 Next steps after feedback on project update

  • Adjust according to the feedback
  • Create new models (potentially multiple linear regression?)
  • Finalise interpretations and answer research questions
  • Compare results to other studies
  • Create some more visualisations (if useful and needed)
  • Include an executive summary at the beginning
  • Add additional data if needed
  • Create the bibliography

Conclusion

  • Take home message

  • Limitations

  • The “CAM_NUM” column in the CCTV dataset suggests that not all CCTVs are included in our dataset.

  • Future work?